
tests: refactor POST /setup/untag tests (API happy path + untag_setup) #304

Open
igennova wants to merge 3 commits into openml:main from igennova:refactor/untag

Conversation

@igennova
Contributor

Description

  • One happy-path check via the test client with a full expected JSON body.
  • Other cases call untag_setup directly.
  • Parametrized direct-call success tests for owner and admin; SQL uses bound parameters.
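
The bound-parameter point can be illustrated with a minimal, self-contained sketch. Here sqlite3 stands in for the project's expdb connection (an assumption for illustration only; the real tests use SQLAlchemy's text() with named parameters):

```python
import sqlite3

# In-memory stand-in for the experiment database used by the real tests.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE setup_tag (id INTEGER, tag TEXT, uploader INTEGER)")

tag = "setup_untag_via_direct"
# Bound parameters: the driver passes the value separately from the SQL
# text, so the tag string is never spliced into the statement.
conn.execute(
    "INSERT INTO setup_tag (id, tag, uploader) VALUES (1, ?, 2)",
    (tag,),
)

rows = conn.execute(
    "SELECT id, tag, uploader FROM setup_tag WHERE tag = ?",
    (tag,),
).fetchall()
print(rows)  # [(1, 'setup_untag_via_direct', 2)]
```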

@coderabbitai
Contributor

coderabbitai bot commented Mar 31, 2026

Walkthrough

Tests for the setup untag flow were refactored. HTTP-layer regex assertions and the re import were removed. Error cases for unknown setup, missing tag, and not-owned tag were converted from HTTP-response checks to direct unit tests invoking untag_setup(...) and asserting SetupNotFoundError, TagNotFoundError, and TagNotOwnedError respectively. Success cases now include an HTTP test that seeds setup_tag and asserts HTTPStatus.OK and the exact JSON body, plus direct-call tests using OWNER_USER/ADMIN_USER that assert the returned payloads and verify deletion of the setup_tag row. Tests now use user objects instead of API-key strings and pass expdb_db=expdb_test for direct calls.

Suggested labels

tests

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 16.67%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title check — ✅ Passed: the title accurately describes the main change, refactoring the POST /setup/untag tests to include an API happy-path test and direct untag_setup function tests.
  • Description check — ✅ Passed: the description clearly relates to the changeset, detailing the happy-path API test, direct function calls, parametrized tests, and SQL best practices.


Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues, and left some high level feedback:

  • Several tests rely on hard-coded IDs and pre-existing DB state (e.g., setup_id=1 and uploader 2 without inserting a corresponding setup), which makes them more brittle; consider inserting the required setup rows explicitly in each test (or via a fixture) so they are fully self-contained.
  • The test_setup_untag_* file now mixes API-level tests and direct untag_setup unit-style tests; if this pattern grows, consider separating router/HTTP tests from service-level tests into different modules to keep test intent and responsibilities clearer.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Several tests rely on hard-coded IDs and pre-existing DB state (e.g., `setup_id=1` and uploader `2` without inserting a corresponding setup), which makes them more brittle; consider inserting the required setup rows explicitly in each test (or via a fixture) so they are fully self-contained.
- The `test_setup_untag_*` file now mixes API-level tests and direct `untag_setup` unit-style tests; if this pattern grows, consider separating router/HTTP tests from service-level tests into different modules to keep test intent and responsibilities clearer.

## Individual Comments

### Comment 1
<location path="tests/routers/openml/setups_untag_test.py" line_range="19-28" />
<code_context>
-async def test_setup_untag_unknown_setup(py_api: httpx.AsyncClient) -> None:
</code_context>
<issue_to_address>
**suggestion (testing):** Consider keeping at least one error-path test through the HTTP API to verify exception-to-HTTP mapping.

Previously, `test_setup_untag_unknown_setup` and `test_setup_untag_tag_not_found` hit the endpoint via `py_api` and asserted on HTTP status codes and error payloads. The new tests only call `untag_setup` directly and assert on custom exceptions, which no longer validates the HTTP contract (status codes like 404 vs 403, and the `detail` structure). Please reintroduce at least one negative-path test through `py_api` (e.g., unknown setup or tag-not-found) that asserts on the response status and JSON body to ensure routing and exception handling remain correct.

Suggested implementation:

```python
async def test_setup_untag_missing_auth(py_api: httpx.AsyncClient) -> None:
    # ... (existing request code elided in the suggestion) ...
    assert response.json()["detail"] == "No API key provided."


@pytest.mark.mut
async def test_setup_untag_unknown_setup_api(
    py_api: httpx.AsyncClient,
    owner_api_key: ApiKey,
) -> None:
    unknown_setup_id = 999999999

    response = await py_api.delete(
        f"/openml/setups/{unknown_setup_id}/tags/some-tag",
        headers={"X-Api-Key": owner_api_key.key},
    )

    assert response.status_code == 404
    body = response.json()
    assert body["detail"]["code"] == "setup_not_found"
    assert body["detail"]["setup_id"] == unknown_setup_id


@pytest.mark.mut
async def test_setup_untag_api_success(

```

To make this compile and pass in your codebase, you may need to:
1. Ensure there is a fixture `owner_api_key: ApiKey` (or similar) that provides a valid API key for the owner user. If your fixtures differ, adjust the test signature and `headers=...` line accordingly (e.g., use `OWNER_USER.api_key_header` or whatever helper you already have).
2. Confirm the URL pattern for the untag endpoint; update `f"/openml/setups/{unknown_setup_id}/tags/some-tag"` to match your actual router path if it differs.
3. Align the expected error payload with your real exception-to-HTTP mapping. If your error response is shaped differently (e.g., `{"detail": "Setup not found"}` or `{"detail": {"type": "...", "msg": "..."}}`), update the `assert body["detail"][...]` lines to match the real structure.
4. If `httpx` or `pytest` are not already imported in this file, add the appropriate imports at the top: `import httpx` and `import pytest`.
</issue_to_address>

### Comment 2
<location path="tests/routers/openml/setups_untag_test.py" line_range="21-30" />
<code_context>
+        )


 @pytest.mark.mut
 @pytest.mark.parametrize(
-    "api_key",
-    [ApiKey.SOME_USER, ApiKey.ADMIN],
+    "user",
+    [SOME_USER, ADMIN_USER],
     ids=["Owner", "Administrator"],
 )
-async def test_setup_untag_success(
-    api_key: str, py_api: httpx.AsyncClient, expdb_test: AsyncConnection
-) -> None:
+async def test_setup_untag_direct_success(user: User, expdb_test: AsyncConnection) -> None:
+    tag = "setup_untag_via_direct"
     await expdb_test.execute(
-        text("INSERT INTO setup_tag (id, tag, uploader) VALUES (1, 'test_success_tag', 2)")
+        text("INSERT INTO setup_tag (id, tag, uploader) VALUES (1, :tag, 2);"),
+        parameters={"tag": tag},
     )

-    response = await py_api.post(
-        f"/setup/untag?api_key={api_key}",
-        json={"setup_id": 1, "tag": "test_success_tag"},
+    result = await untag_setup(
+        setup_id=1,
+        tag=tag,
+        user=user,
+        expdb_db=expdb_test,
</code_context>
<issue_to_address>
**suggestion (testing):** Strengthen the "not owned" test by explicitly asserting that the tag row remains in the database after the forbidden operation.

In `test_setup_untag_not_owned_by_you`, after asserting that `untag_setup` raises `TagNotOwnedError`, also query `setup_tag` and assert that the row `(id=1, tag=tag, uploader=2)` still exists. This ensures the forbidden operation does not delete or mutate the tag record before raising the error.
</issue_to_address>
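
The suggested post-condition — that a forbidden untag must leave the row untouched — can be sketched in a self-contained form. The untag_setup below is a simplified stand-in for the real service function, and sqlite3 replaces the test database (both assumptions for illustration):

```python
import sqlite3

class TagNotOwnedError(Exception):
    """Raised when a user tries to remove a tag they did not create."""

def untag_setup(conn, setup_id, tag, user_id):
    # Check ownership first and raise without touching the row.
    row = conn.execute(
        "SELECT uploader FROM setup_tag WHERE id = ? AND tag = ?",
        (setup_id, tag),
    ).fetchone()
    if row is not None and row[0] != user_id:
        raise TagNotOwnedError(
            f"You may not remove tag '{tag}' of setup {setup_id}."
        )
    conn.execute(
        "DELETE FROM setup_tag WHERE id = ? AND tag = ?", (setup_id, tag)
    )

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE setup_tag (id INTEGER, tag TEXT, uploader INTEGER)")
conn.execute("INSERT INTO setup_tag VALUES (1, 'some_tag', 2)")

try:
    untag_setup(conn, setup_id=1, tag="some_tag", user_id=99)
except TagNotOwnedError:
    pass  # expected: user 99 is not the uploader

# The forbidden call must leave the row untouched.
remaining = conn.execute("SELECT id, tag, uploader FROM setup_tag").fetchall()
print(remaining)  # [(1, 'some_tag', 2)]
```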


@codecov

codecov bot commented Mar 31, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 93.15%. Comparing base (3f1a7ac) to head (eb289bd).
⚠️ Report is 1 commit behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #304      +/-   ##
==========================================
+ Coverage   93.04%   93.15%   +0.10%     
==========================================
  Files          71       72       +1     
  Lines        2991     3037      +46     
  Branches      221      223       +2     
==========================================
+ Hits         2783     2829      +46     
+ Misses        152      151       -1     
- Partials       56       57       +1     


Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
tests/routers/openml/setups_untag_test.py (2)

25-45: Consider extracting shared arrange/assert DB helpers to reduce repetition.

setup_tag insert + post-condition deletion checks are repeated across multiple tests; a tiny helper would keep these tests shorter and easier to evolve.

Also applies to: 73-77, 97-116

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/routers/openml/setups_untag_test.py` around lines 25 - 45, Extract
repeated DB arrange/assert into small helper functions to reduce duplication:
create helpers like insert_setup_tag(expdb_test, setup_id, tag, uploader=2) that
wraps the expdb_test.execute(text("INSERT INTO setup_tag ..."), parameters=...)
and assert_setup_tag_absent(expdb_test, setup_id, tag) that runs the SELECT used
in the post-condition and asserts rows.all() is empty; then update tests that
call expdb_test.execute and the SELECT assert (and the py_api.post usage with
ApiKey.SOME_USER) to use these helpers (or place them in conftest or the test
module) to keep tests shorter and consistent.
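
The helper pattern described above might look roughly like this. sqlite3 stands in for the expdb connection, and the helper names follow the review's suggestion rather than any existing project API (both are assumptions):

```python
import sqlite3

def insert_setup_tag(conn, setup_id, tag, uploader=2):
    """Arrange helper: seed a single setup_tag row."""
    conn.execute(
        "INSERT INTO setup_tag (id, tag, uploader) VALUES (?, ?, ?)",
        (setup_id, tag, uploader),
    )

def assert_setup_tag_absent(conn, setup_id, tag):
    """Post-condition helper: the row must be gone after a successful untag."""
    rows = conn.execute(
        "SELECT * FROM setup_tag WHERE id = ? AND tag = ?",
        (setup_id, tag),
    ).fetchall()
    assert rows == []

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE setup_tag (id INTEGER, tag TEXT, uploader INTEGER)")

insert_setup_tag(conn, setup_id=1, tag="demo_tag")
conn.execute("DELETE FROM setup_tag WHERE id = ? AND tag = ?", (1, "demo_tag"))
assert_setup_tag_absent(conn, setup_id=1, tag="demo_tag")
```

Centralizing the arrange and post-condition steps like this means a schema change only has to be reflected in one place.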

50-58: Keep one API-level negative-path smoke test to protect HTTP contract.

These direct-call tests validate domain logic well, but they no longer assert router exception mapping (status/code/detail) for negative paths. Consider retaining one lightweight API failure case (e.g., tag not found) so endpoint contract regressions are caught.

Suggested addition
+@pytest.mark.mut
+async def test_setup_untag_api_tag_not_found(py_api: httpx.AsyncClient) -> None:
+    missing_tag = "missing_tag_for_api_contract"
+    response = await py_api.post(
+        f"/setup/untag?api_key={ApiKey.SOME_USER}",
+        json={"setup_id": 1, "tag": missing_tag},
+    )
+    # Assert endpoint-level error contract (status + structured payload fields)
+    assert response.status_code != HTTPStatus.OK
+    assert "code" in response.json()
+    assert "detail" in response.json()

Also applies to: 60-68, 72-87

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/routers/openml/setups_untag_test.py` around lines 50 - 58, Add one
API-level negative-path test alongside the direct domain test: create a new test
(e.g., test_api_setup_untag_unknown_setup) that uses the HTTP test client
fixture to call the router endpoint that maps to untag_setup (simulate removing
tag from setup_id=999999/tag="test_tag"), then assert the HTTP response maps the
domain SetupNotFoundError to the expected status/code/detail (e.g., 404 and the
"Setup <id> not found." detail) to protect the endpoint contract.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 6cce0488-28f5-41e0-9124-7db5409189bc

📥 Commits

Reviewing files that changed from the base of the PR and between 3f1a7ac and 8818ff5.

📒 Files selected for processing (1)
  • tests/routers/openml/setups_untag_test.py

@igennova
Contributor Author

@PGijsbers This PR is ready for review. Thank you!

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
tests/routers/openml/setups_untag_test.py (1)

71-92: Good test for the authorization error path.

The test correctly verifies:

  1. The exception is raised when a non-owner attempts removal
  2. The tag remains in the database after the failed operation

Minor observation: using OWNER_USER for a test where the user does not own the tag may be confusing to future readers. If this naming follows existing project conventions, consider adding a brief comment clarifying the relationship (e.g., # OWNER_USER does not own this tag (uploader=2)).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/routers/openml/setups_untag_test.py` around lines 71 - 92, The test
test_setup_untag_not_owned_by_you uses OWNER_USER while inserting the setup_tag
row with uploader=2 which can be confusing; update the test to clarify that
OWNER_USER does not own the tag by either using a different user constant or
adding a one-line comment above the INSERT explaining "OWNER_USER does not own
this tag (uploader=2)", referencing the test function name
test_setup_untag_not_owned_by_you and the untag_setup call so reviewers see the
intent and the setup_tag insert parameters clearly.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 2098474a-6a85-482e-baf7-7201576b9afd

📥 Commits

Reviewing files that changed from the base of the PR and between 8818ff5 and 98ed609.

📒 Files selected for processing (1)
  • tests/routers/openml/setups_untag_test.py

Contributor

@PGijsbers PGijsbers left a comment


Looks good; just one suggestion: remove one test case and rephrase the remaining test case more clearly.

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
tests/routers/openml/setups_untag_test.py (1)

49-61: Anchor the exception-message matches.

Line 50 currently accepts any numeric setup id, and the three match= patterns are all unanchored. Since these tests pass fixed inputs, tightening the patterns will catch message regressions more reliably.

♻️ Suggested diff
-    with pytest.raises(SetupNotFoundError, match=r"Setup \d+ not found."):
+    with pytest.raises(SetupNotFoundError, match=r"^Setup 999999 not found\.$"):
         await untag_setup(
             setup_id=999999,
             tag="test_tag",
             user=SOME_USER,
             expdb_db=expdb_test,
         )
@@
-    with pytest.raises(TagNotFoundError, match=rf"Setup 1 does not have tag '{tag}'\."):
+    with pytest.raises(TagNotFoundError, match=rf"^Setup 1 does not have tag '{tag}'\.$"):
         await untag_setup(
             setup_id=1,
             tag=tag,
             user=SOME_USER,
             expdb_db=expdb_test,
         )
@@
-        match=rf"You may not remove tag '{tag}' of setup 1 because it was not created by you\.",
+        match=rf"^You may not remove tag '{tag}' of setup 1 because it was not created by you\.$",
     ):
         await untag_setup(
             setup_id=1,
             tag=tag,
             user=OWNER_USER,

Also applies to: 77-80
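
For illustration, the difference between the unanchored and anchored patterns can be checked with plain re (pytest.raises(match=...) delegates to re.search under the hood). The sample message below is hypothetical:

```python
import re

# A hypothetical regression where extra text gets appended to the message.
message = "Setup 999999 not found. (extra context appended later)"

# Unanchored pattern: re.search still finds the substring, so
# pytest.raises(match=...) would pass despite the unexpected suffix.
loose_matches = re.search(r"Setup \d+ not found\.", message) is not None

# Anchored pattern: ^ and $ reject the stray suffix.
strict_matches = re.search(r"^Setup 999999 not found\.$", message) is not None

print(loose_matches, strict_matches)  # True False
```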

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/routers/openml/setups_untag_test.py` around lines 49 - 61, Update the
regex assertions in tests to anchor the expected exception messages so they
match exactly; in test_setup_untag_unknown_setup change the match for
SetupNotFoundError to anchor the numeric id pattern (e.g. use ^Setup \d+ not
found\.$ or \ASetup \d+ not found\.\Z) and in test_setup_untag_tag_not_found
anchor the TagNotFoundError pattern to exactly match "Setup 1 does not have tag
'non_existent_tag_12345'." (e.g. ^Setup 1 does not have tag
'non_existent_tag_12345'\.$), and apply the same anchoring fix to the similar
assertions around lines 77-80; locate these in the tests invoking untag_setup
and the SetupNotFoundError/TagNotFoundError assertions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 80e9084c-0b38-4f65-a491-54b55f8df404

📥 Commits

Reviewing files that changed from the base of the PR and between 98ed609 and eb289bd.

📒 Files selected for processing (1)
  • tests/routers/openml/setups_untag_test.py

@igennova igennova requested a review from PGijsbers April 2, 2026 18:45
